Stochastic variance-reduced prox-linear algorithms for nonconvex composite optimization

Authors

Abstract

We consider the problem of minimizing composite functions of the form $$f(g(x))+h(x)$$, where f and h are convex functions (which can be nonsmooth) and g is a smooth vector mapping. In addition, we assume that g is the average of a finite number of component mappings or the expectation over a family of random component mappings. We propose a class of stochastic variance-reduced prox-linear algorithms for solving such problems and bound their sample complexities for finding an $$\epsilon $$-stationary point, in terms of the total number of evaluations of the component mappings and their Jacobians. When g is a finite average of N components, we obtain sample complexity $${\mathcal {O}}(N+ N^{4/5}\epsilon ^{-1})$$ for both mapping and Jacobian evaluations. When g is a general expectation, we obtain sample complexities $${\mathcal {O}}(\epsilon ^{-5/2})$$ and $${\mathcal {O}}(\epsilon ^{-3/2})$$ for component mappings and their Jacobians, respectively. If in addition f is smooth, then improved sample complexities $${\mathcal {O}}(N+N^{1/2}\epsilon ^{-1})$$ and $${\mathcal {O}}(\epsilon ^{-3/2})$$ are derived for g being a finite average and a general expectation, respectively, for both component mapping and Jacobian evaluations.
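The abstract does not reproduce the update rule, but for orientation, a standard (deterministic) prox-linear step for this problem class linearizes the smooth inner mapping g inside f and solves a convex subproblem; here $$\eta > 0$$ is a stepsize, $$g'(x_k)$$ denotes the Jacobian of g at $$x_k$$, and in the stochastic variants described above these quantities would be replaced by variance-reduced estimates:

$$x_{k+1} = \mathop{\mathrm{arg\,min}}_{x} \left\{ f\big(g(x_k) + g'(x_k)(x - x_k)\big) + h(x) + \frac{1}{2\eta}\Vert x - x_k\Vert^2 \right\}$$

Since f and h are convex and the argument of f is affine in x, this subproblem is convex even though the overall objective is not.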


Similar Articles

Stochastic Variance Reduced Optimization for Nonconvex Sparse Learning

We propose a stochastic variance reduced optimization algorithm for solving a class of large-scale nonconvex optimization problems with cardinality constraints. Theoretically, we provide sufficient conditions under which the proposed algorithm enjoys strong linear convergence guarantees and optimal estimation accuracy in high dimensions. We further extend the analysis to its asynchronous varian...
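The description above is cut off; for context, methods in this line of work typically interleave a variance-reduced gradient step with hard thresholding onto the cardinality constraint. With $$\mathcal{H}_s(\cdot)$$ denoting the operator that keeps the s largest-magnitude entries and zeros out the rest (notation assumed here, not taken from the paper), one iteration reads:

$$x_{t+1} = \mathcal{H}_s\big(x_t - \eta\, v_t\big),$$

where $$v_t$$ is a variance-reduced stochastic gradient estimate and $$\eta > 0$$ a stepsize.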

Stochastic Variance Reduction for Nonconvex Optimization

We study nonconvex finite-sum problems and analyze stochastic variance reduced gradient (SVRG) methods for them. SVRG and related methods have recently surged into prominence for convex optimization given their edge over stochastic gradient descent (SGD); but their theoretical analysis almost exclusively assumes convexity. In contrast, we prove non-asymptotic rates of convergence (to stationary...
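For reference, the SVRG estimator analyzed in such work corrects a freshly sampled component gradient with a full gradient computed at a periodically updated snapshot point $$\tilde{x}$$ (notation assumed here; $$i_t$$ is drawn uniformly from $$\{1,\dots,n\}$$):

$$v_t = \nabla f_{i_t}(x_t) - \nabla f_{i_t}(\tilde{x}) + \frac{1}{n}\sum_{i=1}^{n} \nabla f_i(\tilde{x}).$$

The estimator is unbiased, and its variance shrinks as the iterates approach the snapshot, which is what permits non-asymptotic rates without convexity.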

Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization

This paper considers a class of constrained stochastic composite optimization problems whose objective function is given by the summation of a differentiable (possibly nonconvex) component, together with a certain non-differentiable (but convex) component. In order to solve these problems, we propose a randomized stochastic projected gradient (RSPG) algorithm, in which proper mini-batch of samp...
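The description above is truncated; for context, a minimal sketch of the kind of step such mini-batch methods build on (a Euclidean special case under assumed notation, with stepsize $$\gamma_k$$ and mini-batch $$B_k$$) replaces the exact gradient of the smooth part by a mini-batch average and handles the convex nonsmooth part h through a proximal subproblem:

$$x_{k+1} = \mathop{\mathrm{arg\,min}}_{x} \left\{ \Big\langle \frac{1}{|B_k|}\sum_{i \in B_k} \nabla f_i(x_k),\, x \Big\rangle + \frac{1}{2\gamma_k}\Vert x - x_k\Vert^2 + h(x) \right\}$$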

Sample Complexity of Stochastic Variance-Reduced Cubic Regularization for Nonconvex Optimization

The popular cubic regularization (CR) method converges with first- and second-order optimality guarantees for nonconvex optimization, but encounters a high sample complexity issue for solving large-scale problems. Various sub-sampling variants of CR have been proposed to improve the sample complexity. In this paper, we propose a stochastic variance-reduced cubic-regularized (SVRC) Newton’s method ...
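For orientation, the cubic-regularized Newton step that such methods stochasticize solves, at each iterate, the subproblem

$$s_k = \mathop{\mathrm{arg\,min}}_{s}\; \langle \nabla F(x_k), s\rangle + \frac{1}{2} s^{\top} \nabla^2 F(x_k)\, s + \frac{M}{6}\Vert s\Vert^3, \qquad x_{k+1} = x_k + s_k,$$

where M bounds the Lipschitz constant of the Hessian; in the sub-sampled and variance-reduced variants, the gradient and Hessian are replaced by sampled estimates (notation assumed here).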

Parallel Asynchronous Stochastic Variance Reduction for Nonconvex Optimization

Nowadays, asynchronous parallel algorithms have received much attention in the optimization field due to the crucial demands of modern large-scale optimization problems. However, most asynchronous algorithms focus on convex problems; analysis for nonconvex problems is lacking. For the Asynchronous Stochastic Gradient Descent (ASGD) algorithm, the best result from (Lian et al., 2015) can only achieve an ...


Journal

Journal title: Mathematical Programming

Year: 2021

ISSN: 0025-5610, 1436-4646

DOI: https://doi.org/10.1007/s10107-021-01709-z